SEED RL
Playing MOBA game using Deep Reinforcement Learning -- part 2
In the last post, we learned how to train an agent for a simple MOBA game using Deep Reinforcement Learning. In this post, I am going to explain what we need to know before applying the same method to Dota 2. You just need to run Dotaservice and that code together on the same PC. Unlike Derk training, each headless Dota 2 environment requires more than 1 GB of RAM. Therefore, it is better to use a separate PC dedicated to running environments, because DRL training usually works better when there are many environments.
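Because each headless environment needs over 1 GB of RAM, it is worth estimating how many environments a machine can host before launching them. Below is a minimal sketch; the 1 GB-per-environment figure comes from the text, while the reserved-RAM default and the worker body (which would normally connect to a Dotaservice instance) are illustrative assumptions:

```python
import multiprocessing as mp

ENV_RAM_GB = 1.0  # each headless Dota 2 environment needs >1 GB of RAM

def max_environments(total_ram_gb: float, reserved_gb: float = 4.0) -> int:
    """Environments that fit after reserving RAM for the OS and other processes.

    The 4 GB reservation is an assumption, not a measured value.
    """
    return max(0, int((total_ram_gb - reserved_gb) // ENV_RAM_GB))

def env_worker(env_id: int, result_queue: mp.Queue) -> None:
    # Placeholder: a real worker would connect to its Dotaservice instance
    # here. We only report that the worker started.
    result_queue.put(env_id)

if __name__ == "__main__":
    n = max_environments(total_ram_gb=16.0)  # e.g. a 16 GB machine -> 12 envs
    queue: mp.Queue = mp.Queue()
    workers = [mp.Process(target=env_worker, args=(i, queue)) for i in range(n)]
    for w in workers:
        w.start()
    started = sorted(queue.get() for _ in workers)
    for w in workers:
        w.join()
    print(f"launched {n} environment workers: {started}")
```

On a dedicated environment PC, you would size `total_ram_gb` to that machine and leave the training process on a separate host, as the post suggests.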
Playing MOBA game using Deep Reinforcement Learning -- part 1
MOBAs are currently one of the most popular game genres, along with RTS and MMORPG. Unlike RTS games, MOBA games have a fixed number of units that the player can control. However, those units can grow through the level and item systems as the game progresses. In this series, we will learn how to build a Deep Reinforcement Learning agent for a MOBA game, with code examples. Dota 2 is the MOBA game most commonly used for research because it has a Python API and many references.
Google Open Sourced this Architecture for Massively Scalable Reinforcement Learning Models
I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Deep reinforcement learning (DRL) is one of the fastest-growing areas of research in the deep learning space. Responsible for some of the top AI milestones of recent years, such as AlphaGo, OpenAI Five (Dota 2), and AlphaStar, DRL seems to be the discipline that most closely approximates human intelligence.
Google's new SEED RL framework reduces AI model training costs by 80% - SiliconANGLE
Researchers at Google have open-sourced a new framework that can scale up artificial intelligence model training across thousands of machines. It's a promising development because it should enable AI algorithm training to be performed at millions of frames per second while reducing the costs of doing so by as much as 80%, Google noted in a research paper. That kind of reduction could help to level the playing field a bit for startups that previously haven't been able to compete with major players such as Google in AI. Indeed, the cost of training sophisticated machine learning models in the cloud is surprisingly high. One recent report by Synced found that the University of Washington racked up $25,000 in costs to train its Grover model, which is used to detect and generate fake news.
Google open-sources framework that reduces AI training costs by up to 80%
Google researchers recently published a paper describing a framework -- SEED RL -- that scales AI model training to thousands of machines. They say that it could facilitate training at millions of frames per second on a single machine while reducing costs by up to 80%, potentially leveling the playing field for startups that couldn't previously compete with large AI labs. Training sophisticated machine learning models in the cloud remains prohibitively expensive. According to a recent Synced report, the University of Washington's Grover, which is tailored for both the generation and detection of fake news, cost $25,000 to train over the course of two weeks. OpenAI paid $256 per hour to train its GPT-2 language model, and Google spent an estimated $6,912 training BERT, a bidirectional transformer model that redefined the state of the art for 11 natural language processing tasks.
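The cost savings come from SEED RL's centralized-inference design: actors hold no model copy and simply stream observations to a central learner, which runs the policy on an accelerator and streams actions back. The real framework uses TensorFlow and gRPC; the toy sketch below only illustrates the communication pattern, with in-process queues standing in for the network transport and a linear function standing in for the neural-network policy:

```python
import queue
import threading

NUM_ACTORS = 4
STEPS_PER_ACTOR = 5

# Actors ship observations here; the learner replies on per-actor queues.
obs_queue: queue.Queue = queue.Queue()
action_queues = [queue.Queue() for _ in range(NUM_ACTORS)]

def policy(observation: float) -> float:
    # Stand-in for a neural-network forward pass on the learner's accelerator.
    return 2.0 * observation

def learner() -> None:
    # Central inference loop: consume observations, send actions back.
    for _ in range(NUM_ACTORS * STEPS_PER_ACTOR):
        actor_id, obs = obs_queue.get()
        action_queues[actor_id].put(policy(obs))

def actor(actor_id: int, results: list) -> None:
    obs = float(actor_id)
    for _ in range(STEPS_PER_ACTOR):
        obs_queue.put((actor_id, obs))          # send observation to learner
        action = action_queues[actor_id].get()  # receive the chosen action
        obs = action * 0.5                      # fake environment transition
    results.append((actor_id, obs))

if __name__ == "__main__":
    results: list = []
    learner_thread = threading.Thread(target=learner)
    actor_threads = [
        threading.Thread(target=actor, args=(i, results)) for i in range(NUM_ACTORS)
    ]
    learner_thread.start()
    for t in actor_threads:
        t.start()
    for t in actor_threads:
        t.join()
    learner_thread.join()
    print(sorted(results))
```

Keeping inference on the learner means actors can be cheap CPU machines while one accelerator serves them all, which is the source of the scaling and cost figures the articles cite.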